Environment Setup
Activate Libraries
library(dplyr)
library(tibble)
library(kableExtra)
library(DT)
library(readr)
library(dlookr)
library(pastecs)
library(stringr)
library(ggplot2)
library(treemapify)
library(paletteer)
library(sf)
library(usmap)
library(tidyr)
library(purrr)
library(tigris)
library(ggpattern)
library(ggrepel)
library(sessioninfo)
Initial Data Set
Read in data
agencies <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/main/data/2025/2025-02-18/agencies.csv')
## Rows: 19166 Columns: 10
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (6): ori, county, state_abbr, state, agency_name, agency_type
## dbl (2): latitude, longitude
## lgl (1): is_nibrs
## date (1): nibrs_start_date
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
View the data set
diagnose(agencies)
## # A tibble: 10 × 6
## variables types missing_count missing_percent unique_count unique_rate
## <chr> <chr> <int> <dbl> <int> <dbl>
## 1 ori char… 0 0 19166 1
## 2 county char… 0 0 2372 0.124
## 3 latitude nume… 1947 10.2 10492 0.547
## 4 longitude nume… 1947 10.2 10482 0.547
## 5 state_abbr char… 0 0 50 0.00261
## 6 state char… 0 0 50 0.00261
## 7 agency_name char… 0 0 14200 0.741
## 8 agency_type char… 1675 8.74 9 0.000470
## 9 is_nibrs logi… 0 0 2 0.000104
## 10 nibrs_start_date Date 4061 21.2 393 0.0205
The latitude and longitude columns are
numeric data types, nibrs_start_date is a
date data type, is_nibrs is a
logical or Boolean (TRUE/FALSE) data type, and all the
other columns are character data types. There are 1,947
“NA” values for the latitude and longitude variables, 1,675
for the agency_type variable, and 4,061 for the nibrs_start_date
variable. Lastly, there are a total of nine agency types.
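Those missing-value counts can also be cross-checked with base R alone; a minimal sketch against the agencies data frame read in above (no dlookr needed):

```r
# Cross-check the missing-value counts per column with base R.
na_counts <- colSums(is.na(agencies))
na_counts[na_counts > 0]
```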
Aggregation: Agency Types
Summary Subset
I want to see a brief summary of agency type variation to answer the question: how do agency types vary?
agency_summary <- agencies %>%
group_by(agency_type) %>%
summarise(count = n()) %>%
mutate(percentage = count / sum(count) * 100)
print(agency_summary)
## # A tibble: 9 × 3
## agency_type count percentage
## <chr> <int> <dbl>
## 1 City 11385 59.4
## 2 County 3027 15.8
## 3 Other 456 2.38
## 4 Other State Agency 803 4.19
## 5 State Police 897 4.68
## 6 Tribal 198 1.03
## 7 University or College 724 3.78
## 8 Unknown 1 0.00522
## 9 <NA> 1675 8.74
There are more “City” agencies than any other type, with 11,385 of them, a 59.4% dominance across the country. Four more types (“County”, “Other State Agency”, “State Police”, and “University or College”) have a significant number of agencies, one (“Other”) has a modest number, one (“Tribal”) has a smaller number, one (“Unknown”) has only a single agency, and one (“NA”) holds 1,675 un-categorized agencies. That makes a total of nine unique agency types in this column.
Plots
Summary bar plot
I will use a basic bar plot (flipped so the categories sit on the y-axis) to see the variations:
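Since the chunk itself isn’t shown in this rendering, here is a minimal sketch of the kind of flipped bar plot I mean, assuming the agency_summary table above:

```r
library(ggplot2)

# Hedged sketch of the flipped bar plot: counts per agency type,
# ordered by count, with the categories on the y-axis via coord_flip().
ggplot(agency_summary,
       aes(x = reorder(agency_type, count),
           y = count)) +
  geom_col() +
  coord_flip() +
  labs(x = "Agency Type",
       y = "Number of Agencies") +
  theme_minimal()
```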
This is an okay start, but it is very hard to see that the
“Unknown” agency type has one agency. I think using a
treemap for this agency_summary would be better for seeing
the proportions and hierarchies across the nine agency types.
Treemap
I’m going to use the treemapify package, just because it integrates with ggplot well.
ggplot(agency_summary,
aes(area = count,
fill = agency_type,
label = agency_type)) +
geom_treemap() + # create the structure
geom_treemap_text(fontface = "bold",
color = "white",
place = "center",
grow = TRUE,
reflow = TRUE,
size = 3) +
theme_minimal()
## Warning: Removed 1 row containing missing values or values outside the scale range
## (`geom_treemap_text()`).
This produced a catchy and clear treemap by default, but unfortunately, it did remove the “Unknown” agency type. Also, some labels overflow their nested rectangles and are slightly improperly formatted. I think shortening the longer names and combining the “NA” and “Unknown” values into a single category will also help make the visual more appealing.
Testing
Test Concatenation Values
I’m going to test combining the agency_type values,
“Unknown” and “NA”, as mentioned
previously.
agency_summary <- agency_summary %>%
mutate(agency_type = ifelse(agency_type %in% c("Unknown", NA),
"Unclassified",
agency_type)) %>%
group_by(agency_type) %>%
summarise(count = sum(count),
percentage = sum(percentage))
print(agency_summary)
## # A tibble: 8 × 3
## agency_type count percentage
## <chr> <int> <dbl>
## 1 City 11385 59.4
## 2 County 3027 15.8
## 3 Other 456 2.38
## 4 Other State Agency 803 4.19
## 5 State Police 897 4.68
## 6 Tribal 198 1.03
## 7 Unclassified 1676 8.74
## 8 University or College 724 3.78
Now the summary has only eight unique agency types, instead of the nine previously outputted. I’ll also shorten some of the agency type names.
Test Shortening Values
Here are the three values whose names I’m shortening:
- “Other State Agency”
- “State Police”
- “University or College”
agency_summary <- agency_summary %>%
mutate(agency_type = case_when(
agency_type == "Other State Agency" ~ "Sta.Agency",
agency_type == "State Police" ~ "Sta.Police",
agency_type == "University or College" ~ "Uni./College",
TRUE ~ agency_type
))
print(agency_summary)
## # A tibble: 8 × 3
## agency_type count percentage
## <chr> <int> <dbl>
## 1 City 11385 59.4
## 2 County 3027 15.8
## 3 Other 456 2.38
## 4 Sta.Agency 803 4.19
## 5 Sta.Police 897 4.68
## 6 Tribal 198 1.03
## 7 Unclassified 1676 8.74
## 8 Uni./College 724 3.78
Of course this worked and I could stop right here, but I feel I should give a quick note regarding the naming convention I’ve used.
I chose to shorten the word “state” to the abbreviation “Sta.”. It was a bit challenging to find a way to abbreviate it off the top of my head, so naturally, I Googled it and saw that “St.” and “Sta.” came up as solutions.
Given that “St.” already represents “Saint” in the
values of the county column, I went with the
“Sta.” abbreviation.
Now, I’ll apply both transformations to the agencies data
set.
Final Data Set
Copy Main Data Set
df <- agencies %>%
mutate(agency_type = ifelse(
agency_type %in% c("Unknown", NA),
"Unclassified",
agency_type)) %>%
mutate(agency_type = case_when(
agency_type == "Other State Agency" ~ "Sta.Agency",
agency_type == "State Police" ~ "Sta.Police",
agency_type == "University or College" ~ "Uni./College",
TRUE ~ agency_type))
diagnose(df)
## # A tibble: 10 × 6
## variables types missing_count missing_percent unique_count unique_rate
## <chr> <chr> <int> <dbl> <int> <dbl>
## 1 ori char… 0 0 19166 1
## 2 county char… 0 0 2372 0.124
## 3 latitude nume… 1947 10.2 10492 0.547
## 4 longitude nume… 1947 10.2 10482 0.547
## 5 state_abbr char… 0 0 50 0.00261
## 6 state char… 0 0 50 0.00261
## 7 agency_name char… 0 0 14200 0.741
## 8 agency_type char… 0 0 8 0.000417
## 9 is_nibrs logi… 0 0 2 0.000104
## 10 nibrs_start_date Date 4061 21.2 393 0.0205
Glimpse the df Dataframe
as_tibble(df)
## # A tibble: 19,166 × 10
## ori county latitude longitude state_abbr state agency_name agency_type
## <chr> <chr> <dbl> <dbl> <chr> <chr> <chr> <chr>
## 1 AL0430200 LEE 32.6 -85.4 AL Alaba… Opelika Po… City
## 2 AL0430100 LEE 32.6 -85.5 AL Alaba… Auburn Pol… City
## 3 AL0430000 LEE 32.6 -85.4 AL Alaba… Lee County… County
## 4 AL0070100 BIBB 33.0 -87.1 AL Alaba… Centrevill… City
## 5 AL0070000 BIBB 32.9 -87.1 AL Alaba… Bibb Count… County
## 6 AL0070300 BIBB 33.0 -87.1 AL Alaba… West Bloct… City
## 7 AL0070200 BIBB 33.0 -87.1 AL Alaba… Brent Poli… City
## 8 AL0170000 CLAY 33.3 -85.9 AL Alaba… Clay Count… County
## 9 AL0170200 CLAY 33.3 -85.9 AL Alaba… Lineville … City
## 10 AL0170100 CLAY 33.3 -85.9 AL Alaba… Ashland Po… City
## # ℹ 19,156 more rows
## # ℹ 2 more variables: is_nibrs <lgl>, nibrs_start_date <date>
As you can see, the county column is in all capital
letters. Obviously, this column needs to be normalized.
Column Normalization
Here, I’m going to normalize the county column to title
case and then view it.
df <- df %>%
mutate(county = str_to_title(str_trim(county)))
head(setdiff(df$county,
agencies$county), 5)
## [1] "Lee" "Bibb" "Clay" "Dale" "Hale"
Unique Counties
I would like to know: how many unique counties are in the
county column?
unique_counties <- length(unique(df$county))
print(unique_counties)
## [1] 2372
Well, there are 2,372 unique counties in the
df data set. But, now I want to rename the
county column to state_county.
Rename Column
So, let’s start with viewing the initial column names, and then I can do the transformation and view the final results.
colnames(df)
## [1] "ori" "county" "latitude" "longitude"
## [5] "state_abbr" "state" "agency_name" "agency_type"
## [9] "is_nibrs" "nibrs_start_date"
df <- df %>%
rename(state_county = county)
colnames(df)
## [1] "ori" "state_county" "latitude" "longitude"
## [5] "state_abbr" "state" "agency_name" "agency_type"
## [9] "is_nibrs" "nibrs_start_date"
Great! The county column has been successfully renamed to state_county.
Redo Summary Subset
With the previous naming-convention methods in place, I need to redo the
agency_summary table using the df data set as
its source.
agency_summary <- df %>%
group_by(agency_type) %>%
summarise(count = n()) %>%
mutate(percentage = count / sum(count) * 100)
print(agency_summary)
## # A tibble: 8 × 3
## agency_type count percentage
## <chr> <int> <dbl>
## 1 City 11385 59.4
## 2 County 3027 15.8
## 3 Other 456 2.38
## 4 Sta.Agency 803 4.19
## 5 Sta.Police 897 4.68
## 6 Tribal 198 1.03
## 7 Unclassified 1676 8.74
## 8 Uni./College 724 3.78
As you can see, the final table implements all the cleaning and normalizing successfully.
Viz: Treemap
Final visualization
With all of the above done, I can make the final viz to answer the question: how do agency types vary?
If you remember, there are eight unique agency types, so I’m going to first extract eight dynamic colors from the harmo.pal color scheme and display the hexadecimal values for them, too.
harmo_colors <- paletteer_dynamic("cartography::harmo.pal", 8)
print(harmo_colors)
## <colors>
## #F8D6A8FF #E2BCA4FF #CCA3A0FF #B6889CFF #996697FF #7B4492FF #4D3875FF #1E2F57FF
Now, I’ll apply the custom color scheme to the agency type summary I redid earlier.
ggplot(agency_summary,
aes(area = count,
fill = agency_type,
label = agency_type)) +
geom_treemap() +
geom_treemap_text(fontface = "bold",
color = "white",
place = "center",
size = 0.75,
grow = TRUE,
reflow = TRUE,
min.size = 0.75) +
scale_fill_manual(values = harmo_colors,
name = "Agency Type",
labels = c("City",
"County",
"Other",
"State Agency",
"State Police",
"Tribal",
"Unclassified",
"University or College")) +
theme_minimal() +
theme(legend.position = "right")
Yup, I think this looks good!
Recap
Question:
How do agency types vary?
Answer:
The treemap shows that most agencies fall under the “City” agency type, followed by “County” and “Unclassified” (in that order) as the next most dominant types across the United States.
I think this makes sense, since geographically there are significantly more cities than counties, and it segues into how these agency types are distributed.
But why would the FBI branch out by city instead of county?
Distribution: Geographical Agency Types
I’m now going to look into how the agencies are distributed
geographically within each state. But first, let’s revisit the
df data set.
diagnose(df)
## # A tibble: 10 × 6
## variables types missing_count missing_percent unique_count unique_rate
## <chr> <chr> <int> <dbl> <int> <dbl>
## 1 ori char… 0 0 19166 1
## 2 state_county char… 0 0 2372 0.124
## 3 latitude nume… 1947 10.2 10492 0.547
## 4 longitude nume… 1947 10.2 10482 0.547
## 5 state_abbr char… 0 0 50 0.00261
## 6 state char… 0 0 50 0.00261
## 7 agency_name char… 0 0 14200 0.741
## 8 agency_type char… 0 0 8 0.000417
## 9 is_nibrs logi… 0 0 2 0.000104
## 10 nibrs_start_date Date 4061 21.2 393 0.0205
Remember that the latitude and longitude
columns have 1,947 missing values; to confirm, I will use the
diagnose() function again.
Note: there can’t be any missing coordinates for the mapping package(s) I’m considering using,
so I’ll have to filter out or impute those missing values.
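The simplest of those two options is a plain filter. A minimal sketch, assuming the df data set above (df_mappable is a hypothetical name; the mapping step later instead keeps every row and assigns empty geometries):

```r
library(dplyr)

# Hedged sketch: the filtering option, which simply drops the 1,947
# rows whose coordinates are missing (no imputation).
df_mappable <- df %>%
  filter(!is.na(latitude),
         !is.na(longitude))
```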
But first, let me add two additional columns (before I forget):
one counts each agency_type and the other computes the
overall percentage of each agency type within each state.
This is nothing major, just some basic aggregations.
df <- df %>%
group_by(state,
agency_type) %>%
mutate(agency_count = n())
glimpse(df$agency_count)
## int [1:19166] 313 313 67 313 67 313 313 67 313 313 ...
df <- df %>%
group_by(state) %>%
mutate(agency_perc = agency_count / n() * 100.00)
glimpse(df$agency_perc)
## num [1:19166] 70.2 70.2 15 70.2 15 ...
Good, both columns have been aggregated and saved to the
df data set successfully.
Dataframe: df
For now, let me view the first 20
records of the state_county column.
state_countyList <- tibble(county = df$state_county)
print(state_countyList, n = 20) # pass n by name; a bare 20 is taken as the width argument
## # A tibble: 19,166
## # × 1
## county
## <chr>
## 1 Lee
## 2 Lee
## 3 Lee
## 4 Bibb
## 5 Bibb
## 6 Bibb
## 7 Bibb
## 8 Clay
## 9 Clay
## 10 Clay
## # ℹ 19,156 more
## # rows
Same table just now scrollable:
# state_countyList %>%
# knitr::kable(format = "html") %>%
# scroll_box(height = "400px")
Now, with a quick scroll, I noticed “Not Specified” values and abbreviated
place-naming conventions that I prefer to have spelled out (e.g.,
"St Clair"). So I’ll do a basic normalization for white
space and write out abbreviated place names.
I will not remove the “Not Specified” values.
I’ll also view the df data set’s highest counts.
df %>%
group_by(state_county) %>% # use the bare column name rather than df$state_county
tally() %>%
arrange(desc(n))
## # A tibble: 2,372 × 2
## `df$state_county` n
## <chr> <int>
## 1 Not Specified 471
## 2 Jefferson 202
## 3 Washington 193
## 4 Montgomery 174
## 5 Franklin 161
## 6 Cook 147
## 7 Jackson 131
## 8 Allegheny 123
## 9 Orange 120
## 10 Los Angeles 117
## # ℹ 2,362 more rows
As you can see, there are 2,372 unique county names.
Normalize Spatial Data Sets
df <- df %>%
mutate(state_county = str_to_title(str_trim(state_county))) %>%
mutate(state_county = str_replace_all(state_county,
"\\bSt\\b",
"Saint")) %>%
mutate(state_county = str_remove_all(state_county,
"\\b(Planning Region|Region)\\b")) %>%
mutate(state_county = str_trim(state_county)) %>%
filter(state_county != "")
datatable(df)
## Warning in instance$preRenderHook(instance): It seems your data is too big for
## client-side DataTables. You may consider server-side processing:
## https://rstudio.github.io/DT/server.html
The df data set has been successfully normalized.
View df Data Set
Let me view only the missing counts again for the df
data set.
df %>%
diagnose() %>%
select(-unique_count,
-unique_rate) %>%
filter(missing_count > 0) %>%
arrange(desc(missing_count))
## # A tibble: 134 × 6
## variables types state data_count missing_count missing_percent
## <chr> <chr> <chr> <int> <dbl> <dbl>
## 1 nibrs_start_date Date Pennsylvan… 1477 1306 88.4
## 2 nibrs_start_date Date Florida 758 625 82.5
## 3 nibrs_start_date Date New York 590 405 68.6
## 4 latitude numeric Texas 1477 393 26.6
## 5 longitude numeric Texas 1477 393 26.6
## 6 nibrs_start_date Date Illinois 923 288 31.2
## 7 latitude numeric Pennsylvan… 1477 256 17.3
## 8 longitude numeric Pennsylvan… 1477 256 17.3
## 9 nibrs_start_date Date New Jersey 578 232 40.1
## 10 latitude numeric California 859 228 26.5
## # ℹ 124 more rows
So, the latitude, longitude, and
nibrs_start_date are the only columns that have missing
values.
Finalize Spatial Data Set
df_with_geom <- df %>%
mutate(
geometry = map2(longitude, latitude, function(lon, lat) {
if (!is.na(lon) & !is.na(lat)) st_point(c(lon, lat)) else st_geometrycollection()
})
)
df_sf <- st_sf(df_with_geom,
crs = 4326)
cat("Total rows:", nrow(df_sf), "\n")
## Total rows: 19166
cat("Rows with geometry:", sum(!st_is_empty(df_sf$geometry)), "\n")
## Rows with geometry: 17219
cat("Rows missing geometry:", sum(st_is_empty(df_sf$geometry)), "\n")
## Rows missing geometry: 1947
colnames(df_sf)
## [1] "ori" "state_county" "latitude" "longitude"
## [5] "state_abbr" "state" "agency_name" "agency_type"
## [9] "is_nibrs" "nibrs_start_date" "agency_count" "agency_perc"
## [13] "geometry"
diagnose(df_sf)
## # A tibble: 62 × 9
## variables types data_count missing_count missing_percent unique_count
## <chr> <chr> <int> <dbl> <dbl> <int>
## 1 ori character 19166 0 0 19166
## 2 state_county character 19166 0 0 2372
## 3 latitude numeric 19166 1947 10.2 10492
## 4 longitude numeric 19166 1947 10.2 10482
## 5 state_abbr character 19166 0 0 50
## 6 state character 446 0 0 1
## 7 state character 39 0 0 1
## 8 state character 126 0 0 1
## 9 state character 316 0 0 1
## 10 state character 859 0 0 1
## # ℹ 52 more rows
## # ℹ 3 more variables: unique_rate <dbl>, geometry <MULTIPOINT [°]>,
## # variable <MULTIPOINT [°]>
My objective here is to retain every record, assign real geometry where possible, and avoid crashing when the geometry could not be created.
Viz: Distribution Map
I will build a color ramp from the harmo.pal color scheme again, this time mapped to agency_count. This is a basic plot.
agency_type_harmo <- paletteer_dynamic("cartography::harmo.pal", 8)
dfPlot <- df_sf %>%
mutate(has_geometry = !st_is_empty(geometry)) %>%
mutate(is_point = st_geometry_type(geometry) == "POINT") %>%
mutate(coords = map_if(geometry,
is_point, st_coordinates,
.else = ~ matrix(c(NA_real_, NA_real_),
ncol = 2)
),
lon = map_dbl(coords, 1),
lat = map_dbl(coords, 2),
plausible_coords = !is.na(lat) & !is.na(lon) &
between(lon, -130, -60) & between(lat, 20, 55)
)
us_states <- states(cb = TRUE,
year = 2022) %>%
st_transform(crs = 4326)
ggplot() +
geom_sf(data = us_states,
fill = "gray98",
color = "gray80",
size = 0.3) +
geom_sf(data = dfPlot %>%
filter(plausible_coords),
aes(fill = agency_count),
shape = 21,
color = "white",
stroke = 0.2,
alpha = 0.9,
size = 2) +
geom_sf(data = dfPlot %>%
filter(!plausible_coords),
color = "red",
shape = 4,
size = 2,
alpha = 0.6) +
scale_fill_gradientn(colors = agency_type_harmo, name = "Agency Count") +
labs(title = "Agency Types and Counts",
subtitle = "Higher counts are deeper in color") +
coord_sf(xlim = c(-125, -65),
ylim = c(25, 50),
expand = TRUE) +
theme_void() +
theme(legend.position = "bottom")
Session Info
sessioninfo::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.5.1 (2025-06-13)
## os macOS Sequoia 15.6
## system aarch64, darwin20
## ui X11
## language (EN)
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz America/New_York
## date 2025-08-14
## pandoc 3.4 @ /Applications/RStudio.app/Contents/Resources/app/quarto/bin/tools/aarch64/ (via rmarkdown)
## quarto 1.7.33 @ /usr/local/bin/quarto
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## ! package * version date (UTC) lib source
## P bit 4.6.0 2025-03-06 [?] RSPM
## P bit64 4.6.0-1 2025-01-16 [?] RSPM
## P bookdown 0.43 2025-04-15 [?] RSPM
## P boot 1.3-31 2024-08-28 [3] CRAN (R 4.5.1)
## P bslib 0.9.0 2025-01-30 [?] CRAN (R 4.5.0)
## P cachem 1.1.0 2024-05-16 [?] CRAN (R 4.5.0)
## P class 7.3-23 2025-01-01 [3] CRAN (R 4.5.1)
## P classInt 0.4-11 2025-01-08 [?] RSPM
## P cli 3.6.5 2025-04-23 [?] CRAN (R 4.5.0)
## P codetools 0.2-20 2024-03-31 [3] CRAN (R 4.5.1)
## P crayon 1.5.3 2024-06-20 [?] RSPM
## P crosstalk 1.2.1 2023-11-23 [?] RSPM
## P curl 6.4.0 2025-06-22 [?] RSPM
## P DBI 1.2.3 2024-06-02 [?] RSPM
## P digest 0.6.37 2024-08-19 [?] CRAN (R 4.5.0)
## P dlookr * 0.6.3 2024-02-07 [?] RSPM
## P dplyr * 1.1.4 2023-11-17 [?] RSPM
## P DT * 0.33 2024-04-04 [?] RSPM
## P e1071 1.7-16 2024-09-16 [?] RSPM
## P evaluate 1.0.4 2025-06-18 [?] CRAN (R 4.5.0)
## P extrafont 0.19 2023-01-18 [?] RSPM
## P extrafontdb 1.0 2012-06-11 [?] RSPM
## P farver 2.1.2 2024-05-13 [?] RSPM
## P fastmap 1.2.0 2024-05-15 [?] CRAN (R 4.5.0)
## P fontBitstreamVera 0.1.1 2017-02-01 [?] RSPM
## P fontLiberation 0.1.0 2016-10-15 [?] RSPM
## P fontquiver 0.2.1 2017-02-01 [?] RSPM
## P gdtools 0.4.2 2025-03-27 [?] RSPM
## P generics 0.1.4 2025-05-09 [?] RSPM
## P ggfittext 0.10.2 2024-02-01 [?] RSPM
## P ggpattern * 1.1.4 2025-01-29 [?] RSPM
## P ggplot2 * 3.5.2 2025-04-09 [?] RSPM
## P ggrepel * 0.9.6 2024-09-07 [?] RSPM
## P glue 1.8.0 2024-09-30 [?] CRAN (R 4.5.0)
## P gridExtra 2.3 2017-09-09 [?] RSPM
## P gtable 0.3.6 2024-10-25 [?] RSPM
## P hms 1.1.3 2023-03-21 [?] RSPM
## P hrbrthemes 0.8.7 2024-03-04 [?] RSPM
## P htmltools 0.5.8.1 2024-04-04 [?] CRAN (R 4.5.0)
## P htmlwidgets 1.6.4 2023-12-06 [?] RSPM
## P httpuv 1.6.16 2025-04-16 [?] RSPM
## P httr 1.4.7 2023-08-15 [?] RSPM
## P jquerylib 0.1.4 2021-04-26 [?] CRAN (R 4.5.0)
## P jsonlite 2.0.0 2025-03-27 [?] CRAN (R 4.5.0)
## P kableExtra * 1.4.0 2024-01-24 [?] RSPM
## P KernSmooth 2.23-26 2025-01-01 [3] CRAN (R 4.5.1)
## P knitr 1.50 2025-03-16 [?] CRAN (R 4.5.0)
## P labeling 0.4.3 2023-08-29 [?] RSPM
## P later 1.4.2 2025-04-08 [?] RSPM
## P lifecycle 1.0.4 2023-11-07 [?] CRAN (R 4.5.0)
## P magrittr 2.0.3 2022-03-30 [?] CRAN (R 4.5.0)
## P mime 0.13 2025-03-17 [?] CRAN (R 4.5.0)
## P pagedown 0.22 2025-01-07 [?] RSPM
## P paletteer * 1.6.0 2024-01-21 [?] RSPM
## P pastecs * 1.4.2 2024-02-01 [?] RSPM
## P pillar 1.11.0 2025-07-04 [?] RSPM
## P pkgconfig 2.0.3 2019-09-22 [?] RSPM
## P prismatic 1.1.2 2024-04-10 [?] RSPM
## P promises 1.3.3 2025-05-29 [?] RSPM
## P proxy 0.4-27 2022-06-09 [?] RSPM
## P purrr * 1.1.0 2025-07-10 [?] CRAN (R 4.5.0)
## P R6 2.6.1 2025-02-15 [?] CRAN (R 4.5.0)
## P rappdirs 0.3.3 2021-01-31 [?] CRAN (R 4.5.0)
## P RColorBrewer 1.1-3 2022-04-03 [?] RSPM
## P Rcpp 1.1.0 2025-07-02 [?] RSPM
## P reactable 0.4.4 2023-03-12 [?] RSPM
## P readr * 2.1.5 2024-01-10 [?] RSPM
## P rematch2 2.1.2 2020-05-01 [?] RSPM
## P rlang 1.1.6 2025-04-11 [?] CRAN (R 4.5.0)
## P rmarkdown 2.29 2024-11-04 [?] CRAN (R 4.5.0)
## P rmdformats 1.0.4 2022-05-17 [?] RSPM
## P rstudioapi 0.17.1 2024-10-22 [?] RSPM
## P Rttf2pt1 1.3.12 2023-01-22 [?] RSPM
## P s2 1.1.9 2025-05-23 [?] RSPM
## P sass 0.4.10 2025-04-11 [?] CRAN (R 4.5.0)
## P scales 1.4.0 2025-04-24 [?] RSPM
## P sessioninfo * 1.2.3 2025-02-05 [?] RSPM
## P sf * 1.0-21 2025-05-15 [?] RSPM
## P shiny 1.11.1 2025-07-03 [?] RSPM
## P showtext 0.9-7 2024-03-02 [?] RSPM
## P showtextdb 3.0 2020-06-04 [?] RSPM
## P stringi 1.8.7 2025-03-27 [?] RSPM
## P stringr * 1.5.1 2023-11-14 [?] RSPM
## P svglite 2.2.1 2025-05-12 [?] RSPM
## P sysfonts 0.8.9 2024-03-02 [?] RSPM
## P systemfonts 1.2.3 2025-04-30 [?] RSPM
## P textshaping 1.0.1 2025-05-01 [?] RSPM
## P tibble * 3.3.0 2025-06-08 [?] RSPM
## P tidyr * 1.3.1 2024-01-24 [?] RSPM
## P tidyselect 1.2.1 2024-03-11 [?] RSPM
## P tigris * 2.2.1 2025-04-16 [?] RSPM
## P treemapify * 2.5.6 2023-09-30 [?] RSPM
## P tzdb 0.5.0 2025-03-15 [?] RSPM
## P units 0.8-7 2025-03-11 [?] RSPM
## P usmap * 0.8.0 2025-05-28 [?] RSPM
## P utf8 1.2.6 2025-06-08 [?] RSPM
## P uuid 1.2-1 2024-07-29 [?] RSPM
## P vctrs 0.6.5 2023-12-01 [?] RSPM
## P viridisLite 0.4.2 2023-05-02 [?] RSPM
## P vroom 1.6.5 2023-12-05 [?] RSPM
## P withr 3.0.2 2024-10-28 [?] RSPM
## P wk 0.9.4 2024-10-11 [?] RSPM
## P xfun 0.52 2025-04-02 [?] CRAN (R 4.5.0)
## P xml2 1.3.8 2025-03-14 [?] RSPM
## P xtable 1.8-4 2019-04-21 [?] RSPM
## P yaml 2.3.10 2024-07-26 [?] CRAN (R 4.5.0)
##
## [1] /Volumes/externalSamsung/externalHome/venvs/agency-fbi-tidytuesday/renv/library/macos/R-4.5/aarch64-apple-darwin20
## [2] /Volumes/externalSamsung/externalHome/Library/Caches/org.R-project.R/R/renv/sandbox/macos/R-4.5/aarch64-apple-darwin20/4cd76b74
## [3] /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/library
##
## * ── Packages attached to the search path.
## P ── Loaded and on-disk path mismatch.
##
## ──────────────────────────────────────────────────────────────────────────────